Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples and assigns the sample to the class that contributes most to the reconstruction. Because it assumes a test sample can be written as a linear combination of training samples from its own class, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis (PCA) to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and contains the training samples together with their corresponding tangent basis vectors. Using a synthetic data set and three face databases, we demonstrate that this method achieves higher classification accuracy than SRC under sparse sampling, nonlinear class manifolds, and stringent dimension reduction.
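The tangent-augmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the neighborhood size `k`, tangent dimension `d`, and the function names are assumptions introduced for illustration. Local PCA is realized via an SVD of the centered nearest neighbors of each training sample, and the augmented dictionary stacks the training samples with the resulting tangent basis vectors.

```python
import numpy as np

def tangent_basis(samples, x, k=6, d=2):
    """Approximate a basis of the tangent hyperplane at x via local PCA:
    take the k nearest samples to x, center them, and keep the top-d
    right singular vectors of the centered neighborhood."""
    dists = np.linalg.norm(samples - x, axis=1)
    nbrs = samples[np.argsort(dists)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Right singular vectors span the dominant local directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:d].T  # shape: (ambient_dim, d), orthonormal columns

def augmented_dictionary(train, k=6, d=2):
    """Build a local-dictionary matrix whose columns are the training
    samples plus the tangent basis vectors estimated at each of them."""
    cols = [train.T]
    for xi in train:
        cols.append(tangent_basis(train, xi, k, d))
    return np.hstack(cols)

# Toy usage: 20 samples in R^3 yield a dictionary with 20 + 20*2 columns.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 3))
D = augmented_dictionary(train, k=6, d=2)
```

In the full method, a sparse coding step (as in SRC) would then decompose the test sample over a dictionary like `D`, restricted to atoms local to the test sample.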